Patent abstract:
The present invention relates to a method for the secure learning of parameters of a convolutional neural network, CNN, for data classification; the method comprising the implementation, by data processing means (11a) of a first server (1a), of steps of: (a0) receiving from a second server (1b) an already classified learning database, said learning data being homomorphically encrypted; (a1) learning in the encrypted domain, from said learning database, the parameters of a reference CNN comprising at least: a non-linear layer (POLYNOME) operating a polynomial activation function of degree at least two approximating an activation function; a batch normalization layer (BN) before each non-linear layer (POLYNOME); (a2) transmitting to said second server (1b) the learned parameters, for decryption and use in classification. The present invention also relates to methods for the secure classification of an input data item.
Publication number: FR3057090A1
Application number: FR1659439
Filing date: 2016-09-30
Publication date: 2018-04-06
Inventors: Herve Chabanne; Jonathan Milgram; Constance Morel; Emmanuel Prouff
Applicant: Safran Identity and Security SAS
Primary IPC:
Patent description:

GENERAL TECHNICAL AREA
The present invention relates to the field of supervised learning, and in particular to methods for the secure learning of parameters of a convolutional neural network, or for the classification of an input datum by means of a convolutional neural network.
STATE OF THE ART
Neural networks are widely used for the classification of data.
After an automatic learning phase (generally supervised, that is to say on an already classified reference database), a neural network "learns" and becomes capable, on its own, of applying the same classification to unknown data.
Convolutional neural networks, or CNNs (Convolutional Neural Networks), are a type of neural network in which the connection pattern between neurons is inspired by the visual cortex of animals. They are thus particularly suited to a particular type of classification, namely image analysis; they effectively allow the recognition of objects or people in images or videos, in particular in security applications (automatic surveillance, threat detection, etc.).
Today, CNNs are fully satisfactory, but since they are most often used on sensitive and confidential data (be it learning data or data to be classified), it would be desirable to secure them.
More precisely, the learning phase makes it possible to configure the parameters of a CNN, namely its weights and biases.
If an entity A (for example a hospital) has the reference data allowing the learning phase (the patient data of hospital A), and an entity B (for example another hospital) has the data to be classified (the profile of a patient for whom B suspects a disease), then we find ourselves in a situation in which it would be necessary that:
- either A provides B with the weights and biases determined by learning, which A does not want, as this could reveal information about its learning base (its patients);
- or B provides A with the data to be classified, which B does not want (because this would reveal information about its patient).
Similarly, if entity A does not in fact have sufficient computing power to allow the learning of weights and biases from its data, it should be requested from an entity C (for example a service provider), but A does not want C to have the learning base or specific weights and biases.
It is known to solve this type of problem using what is called homomorphic encryption.
More precisely, a homomorphic function φ is a function such that, for a masking operation M, such as multiplication by a mask datum a, there exists an operation O, such as exponentiation by a, such that O(φ(x)) = φ(M(x)), i.e. φ(x)^a = φ(x·a). Such a function can also be homomorphic between two operations Op1 and Op2 if performing the operation Op2 on (φ(x), φ(y)) makes it possible to obtain φ(x Op1 y).
A homomorphic cryptographic system then makes it possible to carry out certain mathematical operations on previously encrypted data instead of the clear data.
Thus, for a given calculation, it becomes possible to encrypt the data, carry out certain calculations associated with said given calculation on the encrypted data, and decrypt the result, obtaining the same result as if said given calculation had been carried out directly on the clear data. Advantageously, the associated calculation in the encrypted domain is the same as the calculation in the clear domain, but for other homomorphic ciphers it is, for example, necessary to multiply the ciphertexts to obtain an addition of the plaintexts.
An encryption scheme is called "fully homomorphic" (FHE, Fully Homomorphic Encryption) when it allows both addition and multiplication to be carried out identically in the encrypted domain.
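As an illustration of the homomorphic property described above, textbook RSA is multiplicatively homomorphic: multiplying two ciphertexts yields an encryption of the product of the plaintexts. A minimal sketch with toy parameters (chosen here purely for illustration; this is not secure, and not the cryptosystem used by the invention):

```python
# Toy multiplicatively homomorphic cipher (textbook RSA).
# The parameters are tiny and NOT secure; for illustration only.

n = 3233          # n = 61 * 53
e = 17            # public exponent
d = 2753          # private exponent: e*d = 1 mod lcm(60, 52)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

x, y = 7, 11
cx, cy = encrypt(x), encrypt(y)

# Multiplying the ciphertexts corresponds to multiplying the plaintexts:
product = decrypt(cx * cy % n)
print(product)  # 77
```

The operation on ciphertexts (modular multiplication) differs from nothing here, but for other homomorphic schemes the ciphertext-side operation associated with a plaintext addition may itself be a multiplication, as noted above.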
In the case of neural networks, we solve the problem of securing the classification by providing that:
- B implements a homomorphic encryption of the data to be classified, and transmits this encrypted data to A, who cannot read it;
- A implements the classification phase on the encrypted data, obtains the encryption of the classification, which it still cannot read, and returns it to B;
- B decrypts the classification result. B was never able to access A's learning data.
And we solve the problem of securing learning by providing that:
- A implements a homomorphic encryption of the reference data of the learning base, and transmits this encrypted data to C, who cannot read it;
- C implements the learning phase on the encrypted data, obtains the encrypted learned parameters, which it still cannot read, and returns them to A;
- A decrypts the learned weights and biases, in order to carry out the classification itself or for B.
However, a CNN generally contains four types of layers which process information successively:
- the convolution layer which deals with blocks of the image one after the other;
- the non-linear layer (also called correction layer) which improves the relevance of the result by applying an "activation function";
- the pooling layer, which makes it possible to group several neurons into a single neuron;
- the fully connected layer which connects all the neurons of a layer to all the neurons of the previous layer.
Currently, the most used activation function for the non-linear layer is the ReLU (Rectified Linear Unit) function, defined as f(x) = max(0, x), and the most used pooling layer is the MaxPool 2×2 function, which takes the maximum of four values in a square (four values are pooled into one).
The convolution layer, denoted CONV, and the fully connected layer, denoted FC, generally correspond to a scalar product between the neurons of the previous layer and the weights of the CNN.
Typical CNN architectures stack a few pairs of CONV → RELU layers, then add a MAXPOOL layer, and repeat this scheme [(CONV → RELU)^p → MAXPOOL] until an output vector of sufficiently small size is obtained, then end with two fully connected layers FC.
Here is a typical CNN architecture (an example of which is shown in Figure 2a):
INPUT → [[CONV → RELU]^p → MAXPOOL]^n → FC → FC
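The ReLU and MaxPool 2×2 building blocks of this architecture can be sketched in plain NumPy (a simplified single-channel illustration of the standard operations, not the patented method):

```python
import numpy as np

def relu(x):
    # Non-linear layer: f(x) = max(0, x)
    return np.maximum(0, x)

def maxpool2x2(x):
    # Pooling layer: maximum over each non-overlapping 2x2 square
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[1., -2., 3., 0.],
              [4., 5., -6., 7.],
              [-1., 0., 2., 2.],
              [3., 1., 0., -5.]])

a = relu(x)           # negative values are clipped to 0
p = maxpool2x2(a)     # 4x4 -> 2x2, one value kept per square
print(p)              # [[5. 7.] [3. 2.]]
```

Note that max(0, x) and the maximum over a square are precisely the non-polynomial operations that prevent direct evaluation on homomorphic ciphertexts, as explained below.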
However, homomorphic systems generally only allow work in the encrypted domain for the operators + and ×, which is not the case for the functions mainly used for the non-linear and pooling layers, which precisely do not depend linearly on the input parameters (in particular ReLU and MaxPool).
Several solutions have therefore been proposed to make CNNs compatible with homomorphic systems.
In the document Ran Gilad-Bachrach, Nathan Dowlin, Kim Laine, Kristin E. Lauter, Michael Naehrig, John Wernsing, "CryptoNets: Applying Neural Networks to Encrypted Data with High Throughput and Accuracy", ICML 2016, the MaxPool function is replaced by a SumPool function and the ReLU function is replaced by the square function (f(x) = x²).
Besides the fact that the benefit of the more advanced ReLU function is lost, the problem of training a CNN with the square function is that its derivative is not bounded. This can lead to erratic behavior during learning, especially if the CNN is deep, and makes the initialization parameters very sensitive and therefore very difficult to choose. This method is consequently not optimal, and is limited to small, shallow CNNs.
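The boundedness issue is easy to see numerically: the derivative of ReLU never exceeds 1, while the derivative of f(x) = x², namely 2x, grows with the input, which is what drives gradients to explode in deep networks. A small illustration:

```python
import numpy as np

xs = np.array([0.5, 5.0, 50.0, 500.0])

relu_grad = (xs > 0).astype(float)   # derivative of max(0, x): 0 or 1
square_grad = 2 * xs                 # derivative of x^2: unbounded

# ReLU gradients stay bounded by 1 regardless of the input scale;
# square-function gradients scale linearly with the input.
print(relu_grad)
print(square_grad)
```

With many stacked layers, a gradient factor proportional to the activations at each layer compounds multiplicatively, hence the sensitivity to initialization noted above.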
In Qingchen Zhang, Laurence T. Yang, and Zhikui Chen, "Privacy preserving deep computation model on cloud for big data feature learning", IEEE Trans. Computers, 65(5):1351-1362, 2016, the activation function (ReLU) is replaced by a degree-three polynomial (a Taylor approximation of the sigmoid function), and all the pooling layers are removed.
This method again loses the benefit of the ReLU function, but somewhat limits the divergence problem of the square function, even if it is still present. On the other hand, the need to remove the pooling layers greatly lengthens the processing time, which makes the method even more unsuitable for large CNNs.
It would therefore be desirable to have a new solution for learning the parameters of a CNN / classifying data by means of a CNN which is fully compatible with homomorphic encryption and does not limit the size of the CNN or its efficiency.
PRESENTATION OF THE INVENTION
According to a first aspect, the present invention relates to a method for secure learning of parameters of a convolutional neural network, CNN, for data classification;
the method comprising the implementation by data processing means of a first server, of steps of:
(a0) Reception from a second server of an already classified training database, said training data being encrypted in a homomorphic manner;
(a1) Learning in the encrypted domain, from said learning database, the parameters of a reference CNN comprising at least:
- a non-linear layer operating a polynomial function of degree at least two approximating an activation function;
- a batch normalization layer before each non-linear layer;
(a2) Transmission to the second server of the parameters learned, for decryption and use in classification.
According to a first variant of a second aspect, the present invention relates to a method for the secure classification of an input data, characterized in that it includes the implementation of steps of:
(a) Learning, by data processing means of a first server, from an already classified learning database, of the parameters of a reference convolutional neural network, CNN, comprising at least:
- a non-linear layer operating a polynomial function of degree at least two approximating an activation function;
- a batch normalization layer before each non-linear layer;
(b) Reception by data processing means of a second server from client equipment of said input data, encrypted in a homomorphic fashion;
(c) Classification by the data processing means of the second server in the encrypted domain of said encrypted input data, by means of the reference CNN;
(d) Transmission to the client equipment of the encryption of the classification obtained, for decryption.
According to other advantageous and non-limiting characteristics:
• step (a) is implemented in accordance with the secure learning method according to the first aspect;
• said polynomial function approximating an activation function is determined before learning by polynomial regression of said activation function from points chosen randomly according to a given distribution;
• said polynomial function approximating an activation function is determined during learning, the coefficients of said polynomial function of degree at least two being part of the parameters learned.
According to a second variant of the second aspect, the present invention relates to a method for the secure classification of an input data, characterized in that it includes the implementation of steps of:
(a) Learning, by data processing means of a first server, from an already classified learning database, of the parameters of a reference convolutional neural network, CNN, comprising at least:
- a non-linear layer operating an activation function;
- a batch normalization layer before each non-linear layer;
the step comprising the determination of a polynomial function of degree at least two approximating said activation function;
(b) Reception by data processing means of a second server from client equipment of said input data, encrypted in a homomorphic fashion;
(c) Classification by the data processing means of the second server, in the encrypted domain, of said encrypted input data, by means of a substitution CNN using the parameters learned for the reference CNN and comprising, in place of each non-linear layer operating the activation function, a non-linear layer operating said determined polynomial function of degree at least two;
(d) Transmission to the client equipment of the encryption of the classification obtained, for decryption.
According to other advantageous and non-limiting characteristics:
• said polynomial function is determined in step (a) by polynomial regression of said activation function from points chosen randomly according to a given distribution;
• step (a) comprises, following the determination of the parameters of the reference CNN, the implementation of at least one additional iteration of learning on the substitution CNN so as to adapt the parameters to said determined polynomial function;
• said polynomial function is determined in step (a) by polynomial regression of said activation function from points recovered at the input of one or more non-linear layers of the reference CNN;
• the reference CNN includes a convolution layer before each batch normalization layer;
• the reference CNN comprises at least one pooling layer operating an average pooling type function, after a non-linear layer;
• the reference CNN includes at least one fully connected final layer;
• the reference CNN has an architecture [[CONV → BN → NL]^p → AVERAGEPOOL]^n → FC → FC or [[CONV → BN → POLYNOME]^p → AVERAGEPOOL]^n → FC → FC;
• said activation function is of the Rectified Linear Unit, ReLU, type;
• said polynomial function is of degree two or three, preferably two;
• said input or learning data are representative of images, said classification being an object recognition.
According to a third and a fourth aspect, the invention provides a computer program product comprising code instructions for the execution of a method according to the first or the second aspect, of secure learning of parameters of a convolutional neural network, CNN, or of secure classification of an input data item; and a storage means readable by computer equipment on which such a computer program product is stored.
PRESENTATION OF THE FIGURES
Other characteristics and advantages of the present invention will appear on reading the following description of a preferred embodiment. This description will be given with reference to the appended drawings in which:
- Figure 1 is a diagram of an architecture for the implementation of the methods according to the invention;
- Figures 2a-2c show three examples of convolutional neural networks, respectively known, according to a first embodiment of the invention, and according to a second embodiment of the invention.
DETAILED DESCRIPTION
Architecture
According to two complementary aspects of the invention, the following are proposed:
- methods for secure learning of parameters of a convolutional neural network (CNN) for data classification; and
- secure classification methods of an input data (using a CNN, advantageously learned thanks to one of the first methods).
More specifically, the learning and/or the use of the CNN can be secured, i.e. carried out in the encrypted domain, thanks to the present invention. Different embodiments of these two types of method will be described.
These two types of method are implemented within an architecture as shown in Figure 1, by means of a first and/or a second server 1a, 1b. The first server 1a is the learning server (implementing the first method) and the second server 1b is the classification server (implementing the second method; it holds the learning database in clear). It is quite possible that these two servers are one and the same, but the security offered by the present invention is of greatest interest when they are distinct, i.e. when it is desirable that the parameters of the CNN and/or the learning base not be communicated in clear from one to the other.
Each of these servers 1a, 1b is typically remote computer equipment connected to a wide area network 2 such as the Internet for the exchange of data. Each comprises data processing means 11a, 11b of processor type (in particular, the data processing means 11a of the first server have high computing power, since learning is long and complex compared with the simple use of the learned CNN), and where appropriate data storage means 12 such as a computer memory, for example a hard disk.
The memory 12 of the second server 1b stores said learning database, i.e. a set of already classified data (as opposed to the so-called input data that one precisely seeks to classify).
The architecture advantageously comprises one or more client devices 10, which can be any workstation (also connected to the network 2), preferably distinct from the servers 1a, 1b but possibly merged with one and/or the other of them. The client equipment 10 holds one or more data items to be classified, which it does not wish to communicate in clear to the servers 1a, 1b. The operators of the client equipment are typically "clients", in the commercial sense of the term, of the operator of the second server 1b.
Indeed, the input or training data are advantageously representative of images (said classification being an object recognition), and an example will be cited in which the client equipment 10 is connected to a security camera, and its operator entrusts the operator of the second server 1b with the classification of the (potentially confidential) images taken by the camera.
According to a preferred embodiment (as represented in Figure 1) combining the two types of method according to the invention, the system comprises the first server 1a and the client equipment 10, each connected to the second server 1b via the network 2, and:
- The second server 1b transfers to the first server 1a the encrypted learning database;
- the first server 1a exploits its computing power to implement a method for secure learning of the parameters of a CNN from this encrypted learning database, and transmits to the second server 1b the learned CNN parameters, themselves encrypted;
- The second server 1b implements a secure classification method of an encrypted input data transmitted from the client equipment 10 using a CNN using the parameters retrieved from the first server 1a and decrypted;
- the second server 1b returns to the client equipment 10 the encrypted result of the classification.
However, it will be understood that it is possible to implement a secure classification from a CNN learned in a conventional manner (not secure) and vice versa (conventional classification from a CNN learned in a secure manner).
Secure learning method
According to a first aspect, the learning method is proposed, implemented by the data processing means 11a of the first server 1a.
In a first step (a0), already mentioned, these means receive from the second server 1b the already classified training database, said training data being encrypted in a homomorphic manner.
Many homomorphic functions are known to the person skilled in the art, who may take the one of his choice, advantageously a fully homomorphic function, for example the BGV scheme (Brakerski, Gentry and Vaikuntanathan).
In a step (a1), from said (encrypted) learning database, the first server learns securely, that is to say directly in the encrypted domain as explained (i.e. from the learning data in encrypted form), the parameters of a so-called reference CNN (as opposed to a substitution CNN, see below) comprising at least:
- a non-linear layer (which will be called "POLYNOME layer") operating a polynomial activation function of degree at least two approximating an activation function;
- a batch normalization layer (which will be called “BN layer” for batch normalization) before each non-linear POLYNOME layer.
The idea is to approximate the activation function (in particular a ReLU function, although other activation functions are possible, such as the Heaviside function; in the following description the example of ReLU is used) by a polynomial of degree at least two, advantageously at most three, and even more advantageously exactly two, so as to create a non-linear "substitution" layer POLYNOME, while adding the BN layer before this POLYNOME layer so as to have a standard centered Gaussian distribution at the input of the POLYNOME layer. This prevents the divergence problem and allows an excellent local approximation of the activation function (since the approximation "domain" is reduced to this distribution, and no longer covers the whole set of reals) — in any case much better than with a square function or a sigmoid function as had previously been attempted — and this without requiring heavy computation (especially when the degree remains two).
BN layers are known in the world of CNNs, but were until now only used to accelerate learning (never for security purposes), and always in combination with non-linear layers using the activation function "as is" (i.e. not approximated).
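The role of the BN layer — bringing each input of the following layer close to a standard centered Gaussian — can be sketched as follows (a simplified batch-statistics version; γ and β are the usual learnable scale and shift, taken here at their default values):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize each feature over the batch to zero mean and unit
    # variance, then apply the learnable scale and shift.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=5.0, size=(256, 4))  # poorly centered inputs
y = batch_norm(x)

# After BN, the inputs presented to the POLYNOME layer follow an
# approximately standard Gaussian distribution, on which the local
# polynomial approximation of the activation function is accurate.
print(np.round(y.mean(axis=0), 6))
```

This is why placing BN immediately before each POLYNOME layer confines the approximation domain, as described above.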
The polynomial function approximating the target activation function (advantageously ReLU) is determined:
- either before learning, by polynomial regression of said activation function from points chosen randomly according to a given distribution (for example, a standard centered Gaussian distribution or a uniform distribution on [-4; 4]);
- or during learning, the coefficients of said polynomial function of degree at least two being part of the learned parameters. Note that a different polynomial can be learned for each non-linear layer.
As in "non-secure" CNNs, the reference CNN obtained by means of the present method advantageously comprises a convolution layer CONV before each batch normalization layer BN, the pattern [CONV → BN → POLYNOME] thus being repeated.
Likewise, the reference CNN advantageously comprises at least one pooling layer, preferably operating an average pooling type function, AveragePool (called the AVERAGEPOOL layer), after a non-linear POLYNOME layer. This is an additional difference from non-secure CNNs, which preferred MaxPool after ReLU, and from the secure CNNs of the prior art, which proposed using SumPool. It will however be understood that it is still possible to use SumPool.
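Unlike MaxPool, AveragePool requires only additions and one multiplication by a constant, both available in the encrypted domain; a plain single-channel NumPy sketch:

```python
import numpy as np

def averagepool2x2(x):
    # Mean over each non-overlapping 2x2 square: a sum of four values
    # followed by a multiplication by 1/4 — homomorphic-friendly.
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

x = np.array([[1., 3., 2., 0.],
              [5., 7., 4., 2.],
              [0., 2., 8., 8.],
              [2., 4., 8., 8.]])

print(averagepool2x2(x))  # [[4. 2.] [2. 8.]]
```

SumPool corresponds to the same computation without the final multiplication by 1/4, which is why the prior art could also use it in the encrypted domain.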
Furthermore, conventionally, the reference CNN advantageously comprises at least one fully connected final FC layer, and preferably two.
In summary, the learned reference CNN preferably presents an architecture [[CONV → BN → POLYNOME]^p → AVERAGEPOOL]^n → FC → FC, as seen in Figure 2b.
Insofar as this reference CNN is compatible with homomorphic encryption, learning in the encrypted domain works and makes it possible to obtain CNN parameters that are themselves encrypted. In a final step (a2) of the learning method, these learned CNN parameters are transmitted to said second server 1b, for decryption and use in classification.
Secure classification method - first variant
According to a second aspect, the method for classifying an input data is proposed, implemented by the data processing means 11b of the second server 1b.
Two variants of this method are possible, but in all cases the classification method comprises four main steps: in a first step (a), the learning of a reference CNN is implemented by the first server 1a; in a second step (b), said input data, encrypted in a homomorphic fashion, is received from the client equipment 10; in a third step (c), the data processing means 11b of the second server 1b classify said encrypted input data in the encrypted domain; finally, in a step (d), the encryption of the classification obtained is transmitted to said client equipment 10 for decryption.
According to the first variant of the classification process, the reference CNN learned in step (a) comprises at least:
- a non-linear POLYNOME layer operating a polynomial function of degree at least two approximating an activation function (ReLU as explained);
- a BN batch normalization layer before each nonlinear POLYNOME layer.
In other words, in this embodiment, the reference CNN conforms to a CNN obtained via the method according to the invention, with the only difference that it may possibly be obtained directly in the clear domain, that is to say without homomorphic encryption of the training data. Preferably, however, the learning is secure and conforms to the method according to the first aspect.
All the optional characteristics and advantages of the reference CNN described for the learning process can be transposed, in particular using pooling layers of the AveragePool type.
Similarly, the polynomial function approximating the target activation function is determined:
- either independently of the learning, by polynomial regression of said activation function from points chosen randomly according to a given distribution (for example, a standard centered Gaussian distribution or a uniform distribution on [-4; 4]);
- or during learning, the coefficients of said polynomial function of degree at least two being part of the learned parameters. Note that a different polynomial can be learned for each non-linear layer.
In this variant, step (c) sees the classification in the encrypted domain of said encrypted input data, directly by means of the reference CNN as learned.
Secure classification method - second variant
According to the second variant of the classification process, the reference CNN learned in step (a) comprises at least:
- a non-linear NL layer operating an activation function (as before in particular ReLU);
- a BN batch normalization layer before each nonlinear NL layer.
In other words, it is a traditional CNN in which the activation function is not approximated. It is therefore understood that such a CNN cannot be learned by the method according to the first aspect of the invention, and cannot be used in the encrypted domain.
However, with this difference, all the optional characteristics and advantages of the reference CNN described for the learning process can be transposed, in particular using pooling layers of the AveragePool type.
In summary, the reference CNN learned for this variant preferably presents an architecture [[CONV → BN → NL]^p → AVERAGEPOOL]^n → FC → FC, as seen in Figure 2c.
However, step (a) also comprises the determination of the polynomial function of degree at least two approximating said activation function. Indeed, as it stands, the reference CNN is not compatible with homomorphic encryption.
To that end:
- either, as in the first variant, the polynomial function is determined in step (a) by polynomial regression of said activation function from points chosen randomly according to a given distribution, for example a standard centered Gaussian distribution or a uniform distribution on [-4; 4];
- or said polynomial function is determined in step (a) by polynomial regression of said activation function from points recovered at the input of one or more non-linear NL layers of the reference CNN.
In the first case, the polynomial is again determined independently of the learning base.
In the second case, the polynomial can be either global (the inputs of all the NL layers of the reference CNN over the learning base are recovered, and the polynomial regression is done on this overall distribution), or associated with a layer; in this last case one polynomial per non-linear layer is obtained, approximating the activation function of that layer (for each NL layer, the inputs of this layer over the learning base are recovered for the reference CNN, and the polynomial regression is done, for each NL layer, on the distribution of this layer).
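A sketch of the per-layer case: one degree-two polynomial is fitted per NL layer, on the distribution of inputs observed at that layer (synthetic stand-in distributions are used here; in practice these inputs would be recovered from forward passes of the reference CNN over the learning base):

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-ins for the inputs recovered at two NL layers of the reference
# CNN; real data would come from forward passes over the learning base.
layer_inputs = {
    "NL1": rng.normal(0.0, 1.0, 5000),
    "NL2": rng.normal(0.2, 1.5, 5000),
}

def relu(x):
    return np.maximum(0, x)

# One degree-two polynomial per non-linear layer, fitted on the
# empirical distribution observed at that layer's input.
polynomials = {
    name: np.poly1d(np.polyfit(x, relu(x), deg=2))
    for name, x in layer_inputs.items()
}

for name, p in polynomials.items():
    x = layer_inputs[name]
    print(name, np.mean((p(x) - relu(x)) ** 2))
```

Fitting per layer lets each polynomial match the actual operating range of its layer, rather than a single global approximation.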
Then, in this variant, step (c) sees the classification in the encrypted domain of said encrypted input data by means of a so-called substitution CNN, which for its part is compatible with homomorphic encryption.
The substitution CNN uses the parameters learned for the reference CNN and comprises, in place of each "real" non-linear NL layer operating the activation function, a non-linear POLYNOME layer operating said determined polynomial function of degree at least two (determined globally or for this NL layer).
In other words, each POLYNOME layer is a substitution layer for an NL layer. For example, for a reference CNN presenting the architecture [[CONV → BN → NL]^p → AVERAGEPOOL]^n → FC → FC mentioned above, the corresponding substitution CNN presents the architecture [[CONV → BN → POLYNOME]^p → AVERAGEPOOL]^n → FC → FC, as seen in Figure 2b.
The substitution CNN obtained is then similar to a reference CNN as used by the first variant of the secure classification method, and/or as could be obtained via the secure learning method according to the first aspect.
Note that prior to classification, step (a) may preferably include, following the determination of the parameters of the reference CNN, the implementation of at least one additional iteration of learning on the substitution CNN so as to adapt the parameters to said determined polynomial function.
Computer program product
According to a third and a fourth aspect, the invention relates to a computer program product comprising code instructions for the execution (in particular on the data processing means 11a, 11b of the first or second server 1a, 1b) of a method according to the first aspect of the invention, for secure learning of parameters of a CNN, or of a method according to the second aspect of the invention, for secure classification of an input data item, as well as storage means readable by computer equipment (a memory of the first or second server 1a, 1b) on which this computer program product is stored.
Claims:
Claims (18)
1. Method for the secure learning of parameters of a convolutional neural network, CNN, for data classification;
the method comprising the implementation by data processing means (11a) of a first server (1a), of steps of:
(a0) Reception from a second server (1b) of an already classified training database, said training data being encrypted in a homomorphic manner;
(a1) Learning in the encrypted domain, from said learning database, the parameters of a reference CNN comprising at least:
- a non-linear layer (POLYNOME) operating a polynomial function of degree at least two approximating an activation function;
- a batch normalization layer (BN) before each non-linear layer (POLYNOME);
(a2) Transmission to the second server (1b) of the parameters learned, for decryption and use in classification.
2. Method for the secure classification of an input data, characterized in that it includes the implementation of steps of:
(a) Learning, by data processing means (11a) of a first server (1a), from an already classified training database, of the parameters of a reference convolutional neural network, CNN, comprising at least:
- a non-linear layer (POLYNOME) operating a polynomial function of degree at least two approximating an activation function;
- a batch normalization layer (BN) before each non-linear layer (POLYNOME);
(b) Reception by data processing means (11b) of a second server (1b) from client equipment (10) of said input data, encrypted in a homomorphic fashion;
(c) Classification by the data processing means (11) of the second server (1b) in the encrypted domain of said encrypted input data, by means of the reference CNN;
(d) Transmission to said client equipment (10) of the encryption of the classification obtained, for decryption.
3. Method according to claim 2, in which step (a) is implemented in accordance with the secure learning method according to claim 1.
4. Method according to one of claims 1 to 3, wherein said polynomial function is determined before learning by polynomial regression of said activation function from points chosen randomly according to a given distribution.
5. Method according to one of claims 1 to 3, wherein said polynomial function is determined during training, the coefficients of said polynomial function of degree at least two being part of the parameters learned.
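When the coefficients are learned during training, as in claim 5, they can be treated like any other weight of the network. As an illustrative sketch (not the patent's training procedure), here the coefficients of a degree-two polynomial are fitted by plain gradient descent on a squared error against ReLU targets:

```python
import numpy as np

rng = np.random.default_rng(1)
xs = rng.standard_normal(4096)
targets = np.maximum(xs, 0.0)          # ReLU values to imitate

# Coefficients of c0 + c1*x + c2*x**2, initialised at zero and updated
# by gradient descent on a mean squared error, exactly as any other
# trainable parameter would be.
c = np.zeros(3)
lr = 0.05
for _ in range(500):
    pred = c[0] + c[1] * xs + c[2] * xs ** 2
    g = 2.0 * (pred - targets) / xs.size
    c -= lr * np.array([g.sum(), (g * xs).sum(), (g * xs ** 2).sum()])
```

On standard-normal inputs this converges towards roughly 0.2 + 0.5x + 0.2x², the least-squares degree-two approximation of ReLU under that distribution.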
6. Method for the secure classification of an input data item, characterized in that it comprises the implementation of steps of:
(a) Learning, by data processing means (11a) of a first server (1a), from an already classified training database, of the parameters of a reference convolutional neural network, CNN, comprising at least:
- a non-linear layer (NL) operating an activation function;
- a batch normalization layer (BN) before each non-linear layer (NL);
said step comprising the determination of a polynomial function of degree at least two approximating said activation function;
(b) Reception by data processing means (11b) of a second server (1b) from client equipment (10) of said input data, encrypted in a homomorphic fashion;
(c) Classification, by the data processing means (11b) of the second server (1b), in the encrypted domain, of said encrypted input data, by means of a substitution CNN using the parameters learned for the reference CNN and comprising, instead of each non-linear layer (NL) operating the activation function, a non-linear layer (POLYNOME) operating said determined polynomial function of degree at least two;
(d) Transmission to said client equipment (10) of the classification result obtained, for decryption.
7. The method of claim 6, wherein said polynomial function is determined in step (a) by polynomial regression of said activation function from points chosen randomly according to a given distribution.
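The regression of claim 7 can be sketched concretely; as an illustrative assumption (the patent does not fix the distribution here), points are drawn from a standard normal and a degree-two polynomial is fitted to ReLU with `numpy.polyfit`:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Draw points according to a chosen distribution (a standard normal is
# assumed here for illustration) and fit a degree-two polynomial to ReLU.
xs = rng.standard_normal(10_000)
coeffs = np.polyfit(xs, relu(xs), deg=2)   # highest-degree coefficient first
approx = np.poly1d(coeffs)

# The fit is only tight where the distribution puts mass, which is one
# reason to normalise the activation inputs beforehand.
grid = np.linspace(-1.0, 1.0, 101)
max_err = np.max(np.abs(approx(grid) - relu(grid)))
```

The linear coefficient of the fit lands near 1/2, consistent with the least-squares approximation of ReLU over a symmetric distribution.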
8. The method of claim 6, wherein said polynomial function is determined in step (a) by polynomial regression of said activation function from points recovered at the input of one or more non-linear layers (NL) of the reference CNN.
9. Method according to one of claims 7 and 8, in which step (a) comprises, following the determination of the parameters of the reference CNN, the implementation of at least one additional learning iteration on the substitution CNN so as to adapt the parameters to said determined polynomial function.
10. Method according to one of claims 1 to 9, in which the reference CNN comprises a convolution layer (CONV) before each batch normalization layer (BN).
11. Method according to one of claims 1 to 10, in which the reference CNN comprises at least one pooling layer operating an average pooling function (AVERAGEPOOL), after a non-linear layer (NL, POLYNOME).
12. Method according to one of claims 1 to 11, wherein the reference CNN comprises at least one final fully connected layer (FC).
13. Method according to claims 10, 11 and 12 in combination, wherein the reference CNN has an architecture [CONV -> BN -> NL -> AVERAGEPOOL] -> FC -> FC or [CONV -> BN -> POLYNOME -> AVERAGEPOOL] -> FC -> FC.
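Reading the repeated block as convolution, batch normalisation, polynomial activation, then average pooling, stacked before two fully connected layers, a single instance of the block can be sketched in plaintext NumPy. Shapes, kernel size and random weights below are illustrative assumptions; the sketch only shows that every step reduces to additions and multiplications:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    # "Valid" single-channel 2-D convolution, written with only the
    # additions and multiplications a homomorphic scheme can evaluate.
    h, w = x.shape[0] - k.shape[0] + 1, x.shape[1] - k.shape[1] + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def bn(x):
    # Illustrative batch normalisation; at inference time the statistics
    # would be the fixed values learned during training.
    return (x - x.mean()) / np.sqrt(x.var() + 1e-5)

def poly(x):
    # Degree-two activation with illustrative coefficients.
    return 0.2 + 0.5 * x + 0.2 * x ** 2

def avgpool(x):
    # 2x2 average pooling: a purely linear operation.
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

x = rng.standard_normal((14, 14))                      # toy input "image"
feat = avgpool(poly(bn(conv2d(x, rng.standard_normal((3, 3))))))
v = feat.ravel()
fc1 = rng.standard_normal((8, v.size)) @ v             # first FC layer
scores = rng.standard_normal((3, 8)) @ fc1             # final FC layer
```

Note that the fully connected layers are themselves linear maps, so the whole pipeline stays within the operations supported in the encrypted domain.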
14. Method according to one of claims 1 to 13, wherein said activation function is of Rectified Linear Unit type, ReLU.
15. Method according to one of claims 1 to 14, wherein said polynomial function is of degree two or three, preferably two.
16. Method according to one of claims 1 to 15, wherein said input or training data are representative of images, said classification being object recognition.
17. Computer program product comprising code instructions for the execution of a method according to one of claims 1 to 16 for secure learning of parameters of a convolutional neural network, CNN, or for secure classification of an input data item.
18. Storage means readable by computer equipment, on which is stored a computer program product comprising code instructions for the execution of a method according to one of claims 1 to 16 for secure learning of parameters of a convolutional neural network, CNN, or for secure classification of an input data item.
Similar technologies:
Publication number | Publication date | Patent title
EP3301617B1|2021-05-19|Methods for secure learning of parameters of a convolutional neural network, and secure classification of input data
EP2795831B1|2016-03-09|Biometric identification using filtering and secure multi party computation
EP3270538B1|2018-09-26|Authentication method and system using confused circuits
EP2924609B1|2016-12-28|Method for enrolment of data in a database for the protection of said data
WO2018104686A1|2018-06-14|Method for secure classification using a transcryption operation
CA2743954C|2018-08-21|Identification or authorisation method, and associated system and secure module
EP3751468A1|2020-12-16|Method for collaborative learning of an artificial neural network without revealing learning data
EP2973210B1|2019-12-04|Secure data processing method, and use in biometry
EP2826200B1|2016-05-11|Method for encrypting a plurality of data in a secure set
WO2018096237A1|2018-05-31|Searchable encryption method
FR3076935A1|2019-07-19|METHODS OF LEARNING PARAMETERS FROM A CONVOLVED NEURON NETWORK, AND CLASSIFYING AN INPUT DATA
FR3088467A1|2020-05-15|METHOD FOR CLASSIFYING A REPRESENTATIVE INPUT IMAGE OF A BIOMETRIC TRAIT USING A CONVOLUTIONAL NEURON NETWORK
FR2925730A1|2009-06-26|METHOD AND SYSTEM FOR AUTHENTICATING INDIVIDUALS FROM BIOMETRIC DATA
FR3095537A1|2020-10-30|CONFIDENTIAL DATA CLASSIFICATION METHOD AND SYSTEM
US20220012366A1|2022-01-13|Privacy-Preserving Image Distribution
EP3799347B1|2021-10-13|Securing of des encryption and reverse des decryption
WO2021229157A1|2021-11-18|Cryptographic method, systems and services for assessing univariate or multivariate true value functions on encrypted data
WO2021009364A1|2021-01-21|Method for identifying outlier data in a set of input data acquired by at least one sensor
EP3583518A1|2019-12-25|Method for information retrieval in an encrypted corpus stored on a server
FR3087033A1|2020-04-10|METHODS OF LEARNING PARAMETERS OF A CONVOLVED NEURON ARRAY AND DETECTING VISIBLE ELEMENTS OF INTEREST
FR3091373A1|2020-07-03|Computer-aided maintenance method and system
FR2982104A1|2013-05-03|METHOD AND SYSTEM FOR CONNECTING THEM WITH INFORMATION SETS RELATING TO A SAME PERSON
FR2926918A1|2009-07-31|METHOD AND SYSTEM FOR RESIZING DIGITAL IMAGES
Patent family:
Publication number | Publication date
EP3301617B1|2021-05-19|
EP3301617A1|2018-04-04|
US11003991B2|2021-05-11|
US20180096248A1|2018-04-05|
FR3057090B1|2018-10-19|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title
WO2016118206A2|2014-11-07|2016-07-28|Microsoft Technology Licensing, Llc|Neural networks for encrypted data|
US9092730B2|2011-08-11|2015-07-28|Greenray Industries, Inc.|Neural network frequency control and compensation of control voltage linearity|
US10311342B1|2016-04-14|2019-06-04|XNOR.ai, Inc.|System and methods for efficiently implementing a convolutional neural network incorporating binarized filter and convolution operation for performing image classification|
GB201607713D0|2016-05-03|2016-06-15|Imagination Tech Ltd|Convolutional neural network|
US20170337682A1|2016-05-18|2017-11-23|Siemens Healthcare Gmbh|Method and System for Image Registration Using an Intelligent Artificial Agent|
US11032061B2|2018-04-27|2021-06-08|Microsoft Technology Licensing, Llc|Enabling constant plaintext space in bootstrapping in fully homomorphic encryption|
CN108804931B|2018-05-24|2020-06-30|成都大象分形智能科技有限公司|Neural network model encryption protection system and method related to domain transformation data encryption|
CN109117940B|2018-06-19|2020-12-15|腾讯科技(深圳)有限公司|Target detection method, device, terminal and storage medium based on convolutional neural network|
US11095428B2|2018-07-24|2021-08-17|Duality Technologies, Inc.|Hybrid system and method for secure collaboration using homomorphic encryption and trusted hardware|
KR102040120B1|2018-07-27|2019-11-05|주식회사 크립토랩|Apparatus for processing approximate encripted messages and methods thereof|
CN109165725A|2018-08-10|2019-01-08|深圳前海微众银行股份有限公司|Neural network federation modeling method, equipment and storage medium based on transfer learning|
CN111079911B|2018-10-19|2021-02-09|中科寒武纪科技股份有限公司|Operation method, system and related product|
KR101984730B1|2018-10-23|2019-06-03| 글루시스|Automatic predicting system for server failure and automatic predicting method for server failure|
GB2579040A|2018-11-15|2020-06-10|Camlin Tech Limited|Apparatus and method for creating and training artificial neural networks|
CN109889320A|2019-01-24|2019-06-14|中国人民武装警察部队工程大学|A kind of full homomorphic cryptography method of efficient BGV type multi-key cipher|
CN109902693A|2019-02-16|2019-06-18|太原理工大学|One kind being based on more attention spatial pyramid characteristic image recognition methods|
AU2019203863B2|2019-03-18|2021-01-28|Advanced New Technologies Co., Ltd.|Preventing misrepresentation of input data by participants in a secure multi-party computation|
CN111143878B|2019-12-20|2021-08-03|支付宝信息技术有限公司|Method and system for model training based on private data|
CN111324870A|2020-01-22|2020-06-23|武汉大学|Outsourcing convolution neural network privacy protection system based on safe two-party calculation|
GB2594453A|2020-04-24|2021-11-03|Thales Holdings Uk Plc|Methods and systems for training a machine learning model|
CN112132260B|2020-09-03|2021-04-20|深圳索信达数据技术有限公司|Training method, calling method, device and storage medium of neural network model|
Legal status:
2017-08-21| PLFP| Fee payment|Year of fee payment: 2 |
2018-04-06| PLSC| Search report ready|Effective date: 20180406 |
2018-08-22| PLFP| Fee payment|Year of fee payment: 3 |
2019-08-20| PLFP| Fee payment|Year of fee payment: 4 |
2020-08-19| PLFP| Fee payment|Year of fee payment: 5 |
2021-08-19| PLFP| Fee payment|Year of fee payment: 6 |
Priority:
Application number | Filing date | Patent title
FR1659439A|FR3057090B1|2016-09-30|2016-09-30|METHODS FOR SECURELY LEARNING PARAMETERS FROM A CONVOLVED NEURON NETWORK AND SECURED CLASSIFICATION OF INPUT DATA|FR1659439A| FR3057090B1|2016-09-30|2016-09-30|METHODS FOR SECURELY LEARNING PARAMETERS FROM A CONVOLVED NEURON NETWORK AND SECURED CLASSIFICATION OF INPUT DATA|
EP17306310.8A| EP3301617B1|2016-09-30|2017-10-02|Methods for secure learning of parameters of a convolutional neural network, and secure classification of input data|
US15/722,490| US11003991B2|2016-09-30|2017-10-02|Methods for secure learning of parameters of a convolution neural network, and for secure input data classification|